Pietro Barbiero

Università della Svizzera Italiana and University of Cambridge

Federated Concept-Based Models: Interpretable models with distributed supervision

Feb 04, 2026

Interpretability in Deep Time Series Models Demands Semantic Alignment

Feb 02, 2026

Mixture of Concept Bottleneck Experts

Feb 02, 2026

Actionable Interpretability Must Be Defined in Terms of Symmetries

Jan 19, 2026

If Concept Bottlenecks are the Question, are Foundation Models the Answer?

Apr 29, 2025

Avoiding Leakage Poisoning: Concept Interventions Under Distribution Shifts

Apr 24, 2025

Deferring Concept Bottleneck Models: Learning to Defer Interventions to Inaccurate Experts

Mar 20, 2025

Logic Explanation of AI Classifiers by Categorical Explaining Functors

Mar 20, 2025

Causally Reliable Concept Bottleneck Models

Mar 06, 2025

Neural Interpretable Reasoning

Feb 17, 2025